
19 December 2023

Antoine Beaupré: (Re)introducing screentest

I have accidentally rewritten screentest, an old X11/GTK2 program that I was previously using to, well, test screens.

Screentest is dead

It was removed from Debian in May 2023, but it had already missed two releases (Debian 11 "bullseye" and 12 "bookworm") due to release-critical bugs. The stated reason for removal was:
The package is orphaned and its upstream is no longer developed. It depends on gtk2, has a low popcon and no reverse dependencies.
So I had little hope of seeing this program come back to Debian. The git repository shows little activity, the last commit being two years ago. Interestingly, I do not quite remember everything it was testing, but I do remember using it to find dead pixels, confirm native resolution, and do various other pixel-peeping. Here's a screenshot of one of the screentest screens:

[screentest screenshot showing a white-on-black checkered background, with some circles in the corners, shades of gray and colors in the middle]

Now, I think it's safe to assume this program is dead and buried, and anyway I'm running Wayland now, surely there's something better? Well, no. Of course not. Someone would know about it and tell me before I go on a random coding spree in a fit of procrastination... riiight? At least, the DebConf video team didn't seem to know of any replacement. They actually suggested I just "invoke gstreamer directly" and "embrace the joy of shell scripting".

Screentest reborn

So, I naively did exactly that and wrote a horrible shell script. Then I realized the next step was to write a command-line parser and monitor geometry guessing, and thought "NOPE, THIS IS WHERE THE SHELL STOPS", and rewrote the whole thing in Python. Now, screentest lives as a ~400-line Python script, half of which is unit test data and command-line parsing.

Why screentest

Some smarty pants is going to complain and ask why the heck one would need something like that (and, well, someone already did), so maybe I can lay down a list of use cases:
  • testing color output, in broad terms (answering the question of "is it just me or is this projector really yellow?")
  • testing focus and keystone ("this looks blurry, can you find a nice sharp frame in that movie to adjust focus?")
  • testing for native resolution and sharpness ("does this projector really support 4k for $30? that sounds like bullcrap")
  • looking for dead pixels ("i have a new monitor, i hope it's intact")

What does screentest do?

Screentest displays a series of "patterns" on screen. The list of patterns is actually hardcoded in the script, copy-pasted from this list from the videotestsrc gstreamer plugin, but you can pass any pattern supported by your gstreamer installation with --patterns. A list of patterns relevant to your installation is available with the gst-inspect-1.0 videotestsrc command. By default, screentest goes through all patterns. Each pattern runs indefinitely until you close the window, then the next pattern starts. You can restrict it to a subset of patterns; for example, this would be a good test for dead pixels:
screentest --patterns black,white,red,green,blue
This would be a good sharpness test:
screentest --patterns pinwheel,spokes,checkers-1,checkers-2,checkers-4,checkers-8
A good generic test is the classic SMPTE color bars pattern, which is the first in the list, but you can run only that test with:
screentest --patterns smpte
(I will mention, by the way, that as a system administrator with decades of experience, it is nearly impossible to type SMPTE without first typing SMTP and re-typing it again a few times before I get it right. I fully expect this post to have numerous typos.)
Here's an example of the SMPTE pattern from Wikipedia:

[SMPTE color bars]

For multi-monitor setups, screentest also supports specifying which output to use, at its native resolution, with --output. Failing that, it will try to look at the outputs and use the first one it finds. If it fails to find anything, you can specify a resolution with --resolution WIDTHxHEIGHT. I have tried to make it go full screen by default, but stumbled on a bug in Sway that crashes gst-launch. If your Wayland compositor supports it, you can possibly enable full screen with --sink waylandsink fullscreen=true. Otherwise it will create a new window that you will have to make fullscreen yourself. For completeness, there's also an --audio flag that will emit the classic "drone", a sine wave at 440Hz at 40% volume (the audiotestsrc gstreamer plugin). And there's an --overlay-name option to show the pattern name, in case you get lost and want to start with one of them again.
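If you are curious what the --audio drone amounts to, it is the kind of thing you can also produce with gst-launch directly; a rough sketch, not necessarily the exact pipeline screentest builds:
# 440 Hz sine wave at 40% volume, similar to what --audio produces
gst-launch-1.0 audiotestsrc wave=sine freq=440 volume=0.4 ! autoaudiosink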

How this works

Most of the work is done by gstreamer. The script merely generates a pipeline and calls gst-launch to show the output. That limits what it can do, but it also makes it much easier to use than figuring out gst-launch yourself. There might be some additional patterns that could be useful, but I think those are better left to gstreamer. I, for example, am somewhat nostalgic for the Philips circle pattern that used to play for TV stations that were off-air in my area. But that, in my opinion, would be better added to the gstreamer plugin than to a separate thing. The script shows which command is being run, so it's a good introduction to gstreamer pipelines. Advanced users (and the video team) will possibly not need screentest and will design their own pipelines with their own tools. I've previously worked with ffmpeg pipelines (in another such procrastinated coding spree, video-proxy-magic), and I found gstreamer more intuitive, even though it might be slightly less powerful. In retrospect, I should probably have picked a new name, to avoid clashing with the name already used by the original project, which is now on GitHub. Who knows, it might come back to life after this blog post; it would not be the first time. For now, the project lives alongside the rest of my scripts collection, but if there's sufficient interest, I might move it to its own git repository. Comments, feedback and contributions are, as usual, welcome. And naturally, if you know something better for this kind of stuff, I'm happy to learn more about your favorite tool! So now I have finally found something to test my projector, which will likely confirm what I've known all along: that it's kind of a piece of crap and I need to get a proper one.
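As a footnote for the curious: the pipelines screentest prints and runs are roughly of this shape. This is a sketch only; the exact sink, caps and options it picks depend on your setup.
# display the SMPTE color bars at a fixed resolution
gst-launch-1.0 videotestsrc pattern=smpte ! video/x-raw,width=1920,height=1080 ! autovideosink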

9 December 2023

Simon Josefsson: Classic McEliece goes to IETF and OpenSSH

My earlier work on Streamlined NTRU Prime has been progressing along. The IETF document on sntrup761 in SSH has passed several process points. GnuPG's libgcrypt has added support for sntrup761. The libssh support for sntrup761 is working, but the merge request is stuck, mostly due to lack of time to debug why the regression test suite sporadically errors out in non-sntrup761-related parts with the patch. The foundation for lattice-based post-quantum algorithms has some uncertainty around it, and I have felt that there is more to the post-quantum story than adding sntrup761 to implementations. Classic McEliece has been mentioned to me a couple of times, and I took some time to learn it and did a cut'n'paste job of the proposed ISO standard and published draft-josefsson-mceliece in the IETF to make the algorithm easily available to the IETF community. A high-quality implementation of Classic McEliece has been published as libmceliece and I've been supporting the work of Jan Mojžíš to package libmceliece for Debian; alas, it has been stuck in the ftp-master NEW queue for manual review for over two months. The pre-dependencies librandombytes and libcpucycles are available in Debian already. All that text writing and packaging work set the scene to write some code. When I added support for sntrup761 in libssh, I became familiar with the OpenSSH code base, so it was natural to return to OpenSSH to experiment with a new SSH KEX for Classic McEliece. DJB suggested picking mceliece6688128 and combining it with the existing X25519+sntrup761 or with plain X25519. While a three-algorithm hybrid between X25519, sntrup761 and mceliece6688128 would be a simple drop-in for those that don't want to lose the benefits offered by sntrup761, I decided to start the journey with a pure combination of X25519 and mceliece6688128. The key combiner in sntrup761x25519 is a simple SHA512 call, and the only good thing I can say about that is that it is simple to describe and implement, and doesn't raise too many questions since it is already deployed. After procrastinating over the coding for months, once I sat down to work it only took a couple of hours until I had a successful Classic McEliece SSH connection. I suppose my brain had sorted everything in the background before I started. To reproduce it, please try the following in a Debian testing environment (I use podman to get a clean environment).
# podman run -it --rm debian:testing-slim
apt update
apt dist-upgrade -y
apt install -y wget python3 librandombytes-dev libcpucycles-dev gcc make git autoconf libz-dev libssl-dev
cd ~
wget -q -O- https://lib.mceliece.org/libmceliece-20230612.tar.gz | tar xfz -
cd libmceliece-20230612/
./configure
make install
ldconfig
cd ..
git clone https://gitlab.com/jas/openssh-portable
cd openssh-portable
git checkout jas/mceliece
autoreconf
./configure # verify 'libmceliece support: yes'
make # CC="cc -DDEBUG_KEX=1 -DDEBUG_KEXDH=1 -DDEBUG_KEXECDH=1"
You should now have a working SSH client and server that supports Classic McEliece! Verify support by running ./ssh -Q kex and it should mention mceliece6688128x25519-sha512@openssh.com. To have it print plenty of debug output, you may remove the # character on the final line, but don't use such a build in production. You can test it as follows:
./ssh-keygen -A # writes to /usr/local/etc/ssh_host_...
# setup public-key based login by running the following:
./ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
adduser --system sshd
mkdir /var/empty
while true; do $PWD/sshd -p 2222 -f /dev/null; done &
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date
On the client you should see output like this:
OpenSSH_9.5p1, OpenSSL 3.0.11 19 Sep 2023
...
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: mceliece6688128x25519-sha512@openssh.com
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:YognhWY7+399J+/V8eAQWmM3UFDLT0dkmoj3pIJ0zXs
...
debug1: Host '[localhost]:2222' is known and matches the ED25519 host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
...
debug1: Sending command: date
debug1: pledge: fork
debug1: permanently_set_uid: 0/0
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=::1 46894 2222
  SSH_CONNECTION=::1 46894 ::1 2222
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
Sat Dec  9 22:22:40 UTC 2023
debug1: channel 0: free: client-session, nchannels 1
Transferred: sent 1048044, received 3500 bytes, in 0.0 seconds
Bytes per second: sent 23388935.4, received 78108.6
debug1: Exit status 0
Notice the kex: algorithm: mceliece6688128x25519-sha512@openssh.com output. How about network bandwidth usage? Below is a comparison of a complete SSH client connection such as the one above, which logs in, prints date and logs out. Plain X25519 is around 7kb, X25519 with sntrup761 is around 9kb, and mceliece6688128 with X25519 is around 1MB. Yes, Classic McEliece has large keys, but for many environments, 1MB of data for the session establishment will barely be noticeable.
./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date 2>&1 | grep ^Transferred
Transferred: sent 3028, received 3612 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 4212, received 4596 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 1048044, received 3764 bytes, in 0.0 seconds
So how about session establishment time?
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:19 UTC 2023
Sat Dec  9 22:39:25 UTC 2023
# 6 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:29 UTC 2023
Sat Dec  9 22:39:38 UTC 2023
# 9 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date > /dev/null 2>&1; i=`expr $i + 1`; done; date
Sat Dec  9 22:39:55 UTC 2023
Sat Dec  9 22:40:07 UTC 2023
# 12 seconds
I never noticed adding sntrup761, so I'm pretty sure I wouldn't notice this increase either. This is all running on my laptop that runs Trisquel, so take it with a grain of salt, but at least the magnitude is clear. Future work items include: Happy post-quantum SSH'ing! Update: Changing the mceliece6688128_keypair call to mceliece6688128f_keypair (i.e., using the fully compatible f-variant) results in McEliece being just as fast as sntrup761 on my machine. Update 2023-12-26: An initial IETF document draft-josefsson-ssh-mceliece-00 was published.

26 November 2023

Niels Thykier: Providing online reference documentation for debputy

I do not think seasoned Debian contributors quite appreciate how much knowledge we have picked up and internalized. As an example, when I need to look up documentation for debhelper, I generally know which manpage to look in. I suspect most long-time contributors would be able to do a similar thing (maybe narrowing it down to 2-3 manpages). But new contributors do not have the luxury of years of experience. This problem is by no means unique to debhelper. One thing that debhelper does very well is that it is hard for users to tell where an addon "starts" and debhelper "ends". It is clear you use addons, but the transition in and out of third-party provided tools is generally smooth. This is a sign that things "just work(tm)". Except when it comes to documentation. Here, debhelper's static documentation does not include documentation for third-party tooling. If you think from a debhelper maintainer's perspective, this seems obvious. Embedding documentation for all the third-party code would be very hard work, a layer violation, etc. But from a user perspective, we should not have to care "who" provides "what". As a user, I want to understand how this works, and the more hoops I have to jump through to get that understanding, the more frustrated I will be with the toolstack. With this, I came to the conclusion that the best way to help users and solve the problem of finding the documentation was to provide "online documentation". It should be possible to ask debputy, "What attributes can I use in install-man?" or "What does path-metadata do?". Additionally, the lookup should work the same no matter if debputy provided the feature or some third-party plugin did. In the future, perhaps also other types of documentation such as tutorials or how-to guides. Below, I have some tentative results of my work so far. There are some improvements to be done. Notably, the commands for these documentation features are still treated as "plugin" subcommand features and should probably get their own top-level "ask-me-anything" subcommand in the future.
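As a concrete example of the kind of lookup I have in mind, the first question above maps to a plugin show invocation (using the subcommands demonstrated in the rest of this post):
$ debputy plugin show plugable-manifest-rules install-man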
Automatic discard rules

Since the introduction of install rules, debputy has included an automatic filter mechanism that prunes out unwanted content. In 0.1.9, these filters have been named "Automatic discard rules" and you can now ask debputy to list them:
$ debputy plugin list automatic-discard-rules
+-----------------------+-------------+
| Name                  | Provided By |
+-----------------------+-------------+
| python-cache-files    | debputy     |
| la-files              | debputy     |
| backup-files          | debputy     |
| version-control-paths | debputy     |
| gnu-info-dir-file     | debputy     |
| debian-dir            | debputy     |
| doxygen-cruft-files   | debputy     |
+-----------------------+-------------+
For these rules, the provider can provide both a description and an example of their usage.
$ debputy plugin show automatic-discard-rules la-files
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
    /usr/lib/libfoo.la        << Discarded (directly by the rule)
    /usr/lib/libfoo.so.1.0.0
The example is a live example. That is, the provider will provide debputy with a scenario and the expected outcome of that scenario. Here is the concrete code in debputy that registers this example:
api.automatic_discard_rule(
    "la-files",
    _debputy_prune_la_files,
    rule_reference_documentation="Discards any .la files beneath /usr/lib",
    examples=automatic_discard_rule_example(
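        # example scenario: the .la path should be discarded; the False marks the .so path as expected to be kept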
        "usr/lib/libfoo.la",
        ("usr/lib/libfoo.so.1.0.0", False),
    ),
)
When showing the example, debputy will validate that the example matches what the plugin provider intended. Let's say I were to introduce a bug in the code, so that the discard rule no longer worked. Then debputy would start to show the following:
# Output if the code or example is broken
$ debputy plugin show automatic-discard-rules la-files
[...]
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
    /usr/lib/libfoo.la        !! INCONSISTENT (code: keep, example: discard)
    /usr/lib/libfoo.so.1.0.0
debputy: warning: The example was inconsistent. Please file a bug against the plugin debputy
Obviously, it would be better if this validation could be added directly as a plugin test, so the CI pipeline would catch it. That is on my personal TODO list. :) One final remark about automatic discard rules before moving on. In 0.1.9, debputy will also list any path automatically discarded by one of these rules in the build output, to make the automatic discard rule feature more discoverable.
Plugable manifest rules like the install rule

In the manifest, there are several places where rules can be provided by plugins. To make life easier for users, debputy can now (since 0.1.8) list all provided rules:
$ debputy plugin list plugable-manifest-rules
+-------------------------------+------------------------------+-------------+
| Rule Name                     | Rule Type                    | Provided By |
+-------------------------------+------------------------------+-------------+
| install                       | InstallRule                  | debputy     |
| install-docs                  | InstallRule                  | debputy     |
| install-examples              | InstallRule                  | debputy     |
| install-doc                   | InstallRule                  | debputy     |
| install-example               | InstallRule                  | debputy     |
| install-man                   | InstallRule                  | debputy     |
| discard                       | InstallRule                  | debputy     |
| move                          | TransformationRule           | debputy     |
| remove                        | TransformationRule           | debputy     |
| [...]                         | [...]                        | [...]       |
| remove                        | DpkgMaintscriptHelperCommand | debputy     |
| rename                        | DpkgMaintscriptHelperCommand | debputy     |
| cross-compiling               | ManifestCondition            | debputy     |
| can-execute-compiled-binaries | ManifestCondition            | debputy     |
| run-build-time-tests          | ManifestCondition            | debputy     |
| [...]                         | [...]                        | [...]       |
+-------------------------------+------------------------------+-------------+
(Output trimmed a bit for space reasons) And you can then ask debputy to describe any of these rules:
$ debputy plugin show plugable-manifest-rules install
Generic install ( install )
===========================
The generic  install  rule can be used to install arbitrary paths into packages
and is *similar* to how  dh_install  from debhelper works.  It has two "primary" uses.
  1) The classic "install into directory" similar to the standard  dh_install 
  2) The "install as" similar to  dh-exec 's  foo => bar  feature.
Attributes:
 -  source  (conditional): string
    sources  (conditional): List of string
   A path match ( source ) or a list of path matches ( sources ) defining the
   source path(s) to be installed. [...]
 -  dest-dir  (optional): string
   A path defining the destination *directory*. [...]
 -  into  (optional): string or a list of string
   A path defining the destination *directory*. [...]
 -  as  (optional): string
   A path defining the path to install the source as. [...]
 -  when  (optional): manifest condition (string or mapping of string)
   A condition as defined in [Conditional rules](https://salsa.debian.org/debian/debputy/-/blob/main/MANIFEST-FORMAT.md#Conditional rules).
This rule enforces the following restrictions:
 - The rule must use exactly one of:  source ,  sources 
 - The attribute  as  cannot be used with any of:  dest-dir ,  sources 
[...]
(Output trimmed a bit for space reasons) All the attributes and restrictions are auto-computed by debputy from information provided by the plugin. The associated documentation for each attribute is supplied by the plugin itself. The debputy API validates that all attributes are covered and that the documentation does not describe non-existing fields. This ensures that you as a plugin provider never forget to document new attributes when you add them later. The debputy API for manifest rules is not quite stable yet, so currently only debputy provides rules here. However, it is my intention to lift that restriction in the future. I got the idea of supporting online validated examples when I was building this feature. However, sadly, I have not gotten around to supporting it yet.
Manifest variables like PACKAGE

I also added a similar documentation feature for manifest variables such as PACKAGE. When I implemented this, I realized that listing all manifest variables by default would probably be counterproductive for new users. As an example, if you list all variables by default, it would include DEB_HOST_MULTIARCH (the most common case) side-by-side with the much less used DEB_BUILD_MULTIARCH and the even lesser used DEB_TARGET_MULTIARCH variable. Having them side-by-side implies they are of equal importance, which they are not. As an example, the ballpark number of unique packages for which DEB_TARGET_MULTIARCH is useful can be counted on two hands (and maybe two feet if you consider gcc-X distinct from gcc-Y). This is one of the cases where experience makes us blind. Many of us probably have the "show me everything and I will find what I need" mentality. But that requires experience to be able to pull off - especially if all alternatives are presented as equals. The cross-building terminology has proven to notoriously match poorly to people's expectations. Therefore, I made a deliberate choice to reduce the list of shown variables by default and to explicitly list in the output which filters were active. In the current version of debputy (0.1.9), the listing of manifest variables looks something like this:
$ debputy plugin list manifest-variables
+----------------------------------+----------------------------------------+------+-------------+
| Variable (use via: {{ NAME }})   | Value                                  | Flag | Provided by |
+----------------------------------+----------------------------------------+------+-------------+
| DEB_HOST_ARCH                    | amd64                                  |      | debputy     |
| [... other DEB_HOST_* vars ...]  | [...]                                  |      | debputy     |
| DEB_HOST_MULTIARCH               | x86_64-linux-gnu                       |      | debputy     |
| DEB_SOURCE                       | debputy                                |      | debputy     |
| DEB_VERSION                      | 0.1.8                                  |      | debputy     |
| DEB_VERSION_EPOCH_UPSTREAM       | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM             | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM_REVISION    | 0.1.8                                  |      | debputy     |
| PACKAGE                          | <package-name>                         |      | debputy     |
| path:BASH_COMPLETION_DIR         | /usr/share/bash-completion/completions |      | debputy     |
+----------------------------------+----------------------------------------+------+-------------+
+-----------------------+--------+-------------------------------------------------------+
| Variable type         | Value  | Option                                                |
+-----------------------+--------+-------------------------------------------------------+
| Token variables       | hidden | --show-token-variables OR --show-all-variables        |
| Special use variables | hidden | --show-special-case-variables OR --show-all-variables |
+-----------------------+--------+-------------------------------------------------------+
I will probably tweak the concrete listing in the future. Personally, I am considering providing shorthand variables for some of the DEB_HOST_* variables and then hiding the DEB_HOST_* group from the default view as well. Maybe something like ARCH and MULTIARCH, which would default to their DEB_HOST_* counterparts. These variables could then have extended documentation that highlights DEB_HOST_<X> as their source and implies that there are special cases for cross-building where you might need DEB_BUILD_<X> or DEB_TARGET_<X>. Speaking of variable documentation, you can also look up the documentation for a given manifest variable:
$ debputy plugin show manifest-variables path:BASH_COMPLETION_DIR
Variable: path:BASH_COMPLETION_DIR
==================================
Documentation: Directory to install bash completions into
Resolved: /usr/share/bash-completion/completions
Plugin: debputy
This was my update on online reference documentation for debputy. I hope you found it useful. :)
Thanks

On a closing note, I would like to thank Jochen Sprickerhof, Andres Salomon and Paul Gevers for their recent contributions to debputy. Jochen and Paul provided a number of real-world cases where debputy would crash or not work, which have now been fixed. Andres and Paul also provided corrections to the documentation.

21 November 2023

Russ Allbery: Review: Thud!

Review: Thud!, by Terry Pratchett
Series: Discworld #34
Publisher: Harper
Copyright: October 2005
Printing: November 2014
ISBN: 0-06-233498-0
Format: Mass market
Pages: 434
Thud! is the 34th Discworld novel and the seventh Watch novel. It is partly a sequel to The Fifth Elephant, partly a sequel to Night Watch, and references many of the previous Watch novels. This is not a good place to start. Dwarfs and trolls have a long history of conflict, as one might expect between a race of creatures who specialize in mining and a race of creatures whose vital organs are sometimes the targets of that mining. The first battle of Koom Valley was the place where that enmity was made concrete and given a symbol. Now that there are large dwarf and troll populations in Ankh-Morpork, the upcoming anniversary of that battle is the excuse for rising tensions. Worse, Grag Hamcrusher, a revered deep-down dwarf and a dwarf supremacist, is giving incendiary speeches about killing all trolls and appears to be tunneling under the city. Then whispers run through the city's dwarfs that Hamcrusher has been murdered by a troll. Vimes has no patience for racial tensions, or for the inspection of the Watch by one of Vetinari's excessively competent clerks, or the political pressure to add a vampire to the Watch over his prejudiced objections. He was already grumpy before the murder and is in absolutely no mood to be told by deep-down dwarfs who barely believe that humans exist that the murder of a dwarf underground is no affair of his. Meanwhile, The Battle of Koom Valley by Methodia Rascal has been stolen from the Ankh-Morpork Royal Art Museum, an impressive feat given that the painting is ten feet high and fifty feet long. It was painted in impressive detail by a madman who thought he was a chicken, and has been the spark for endless theories about clues to some great treasure or hidden knowledge, culminating in the conspiratorial book Koom Valley Codex. But the museum prides itself on allowing people to inspect and photograph the painting to their heart's content and was working on a new room to display it. It's not clear why someone would want to steal it, but Colon and Nobby are on the case. This was a good time to read this novel. Sadly, the same could be said of pretty much every year since it was written. "Thud" in the title is a reference to Hamcrusher's murder, which was supposedly done by a troll club that was found nearby, but it's also a reference to a board game that we first saw in passing in Going Postal. We find out a lot more about Thud in this book. It's an asymmetric two-player board game that simulates a stylized battle between dwarf and troll forces, with one player playing the trolls and the other playing the dwarfs. The obvious comparison is to chess, but a better comparison would be to the old Steve Jackson Games board game Ogre, which also featured asymmetric combat mechanics. (I'm sure there are many others.) This board game will become quite central to the plot of Thud! in ways that I thought were ingenious. I thought this was one of Pratchett's best-plotted books to date. There are a lot of things happening, involving essentially every member of the Watch that we've met in previous books, and they all matter and I was never confused by how they fit together. This book is full of little callbacks and apparently small things that become important later in a way that I found delightful to read, down to the children's book that Vimes reads to his son and that turns into the best scene of the book. At this point in my Discworld read-through, I can see why the Watch books are considered the best sub-series. 
It feels like Pratchett kicks the quality of writing up a notch when he has Vimes as a protagonist. In several books now, Pratchett has created a villain by taking some human characteristic and turning it into an external force that acts on humans. (See, for instance, the Gonne in Men at Arms, or the hiver in A Hat Full of Sky.) I normally do not like this plot technique, both because I think it lets humans off the hook in a way that cheapens the story and because this type of belief has a long and bad reputation in religions where it is used to dodge personal responsibility and dehumanize one's enemies. When another of those villains turned up in this book, I was dubious. But I think Pratchett pulls off this type of villain as well here as I've seen it done. He lifts up a facet of humanity to let the reader get a better view, but somehow makes it explicit that this is a concretized metaphor. This force is something people create and feed and choose and therefore are responsible for. The one sour note that I do have to complain about is that Pratchett resorts to some cheap and annoying "men are from Mars, women are from Venus" nonsense, mostly around Nobby's subplot but in a few other places (Sybil, some of Angua's internal monologue) as well. It's relatively minor, and I might let it pass without grumbling in other books, but usually Pratchett is better on gender than this. I expected better and it got under my skin. Otherwise, though, this was a quietly excellent book. It doesn't have the emotional gut punch of Night Watch, but the plotting is superb and the pacing is a significant improvement over The Fifth Elephant. The parody is of The Da Vinci Code, which is both more interesting than Pratchett's typical movie parodies and delightfully subtle. We get more of Sybil being a bad-ass, which I am always here for. There's even some lovely world-building in the form of dwarven Devices. I love how Pratchett has built Vimes up into one of the most deceptively heroic figures on Discworld, but also shows all of the support infrastructure that ensures Vimes maintains his principles. On the surface, Thud! has a lot in common with Vimes's insistently moral stance in Jingo, but here it is more obvious how Vimes's morality happens in part because his wife, his friends, and his boss create the conditions for it to thrive. Highly recommended to anyone who has gotten this far. Rating: 9 out of 10

11 November 2023

Gunnar Wolf: There once was a miniDebConf in Uruguay...

Meeting Debian people for having a good time together, for some good hacking, for learning, for teaching... is always fun and welcome. It brings energy, life and joy. And this year, due to the six-month-long relocation to Argentina my family and I decided to have, I was unable to attend the real deal, DebConf23 in India. And while I know DebConf is an experience like no other, this year I took part in two miniDebConfs. One I have already shared in this same blog: I was in MiniDebConf Tamil Nadu in India, followed by some days of pre-DebConf preparation and scouting in Kochi proper, where I got to interact with the absolutely great and loving team that prepared DebConf. The other one is still ongoing (but close to finishing). Some months ago, I talked with Santiago Ruano, joking, as we were Spanish-speaking DDs announcing to the debian-private mailing list that we'd be relocating to around Río de la Plata. And things worked out normally: he has already been in Uruguay for several months, so he decided to rent a house for some days, and invite Debian people to do what we do best. I left Paraná Tuesday night (and missed my online class at UNAM! Well, you cannot have everything, right?). I arrived early on Wednesday, and around noon came to the house of the keysigning (well, the place is properly called "Casa Key"; it's a publicity agency that is also rented as a guesthouse in a very nice area of Montevideo, close to Nuevo Pocitos beach). In case you don't know it, Montevideo is on the Northern (or Eastern) shore of Río de la Plata, the widest river in the world (up to 300 km wide, with current and non-salty water). But most important for some Debian contributors: you can even come here by boat! That first evening, we received Ilu, who was in Uruguay by chance for other issues (and we were very happy about it!) and a young and enthusiastic Uruguayan, Felipe, interested in getting involved in Debian. We spent the evening talking about life, the universe and everything... which was a bit tiring, as I had to interface between Spanish and English, talking with two friends that didn't share a common language. On Thursday morning, I went out for an early walk at the beach. And let's say, if only just for the narrative, that I found a lost penguin emerging from Río de la Plata! For those that don't know (who'd be most of you, as he has not been seen at Debian events for 15 years), that's Lisandro Damián Nicanor Pérez Meyer (or just lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world. From Argentina, we also had Emanuel (eamanu) crossing all the way from La Rioja. I spent most of our first workday getting my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat; people that know the particularities of my much loved ARM-based laptop will understand), and running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months. I am happy to say we are finally also building Raspberry images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (and which will probably soon be dropped, if not for anything else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn't get photographic evidence of them (nor the permission to post it). The nice house Santiago got for us was very well equipped for a miniDebConf. There were a couple of rounds of pool played by those that enjoyed it (I was very happy just to stand around, take some photos and enjoy the atmosphere and the conversation). Today (Saturday) is the last full-house day of the miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long, important conversation about a discussion that we are about to present on debian-vote@lists.debian.org. It has been a great couple of days! Sadly, it's coming to an end... But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, food and healthy enjoyment!

23 October 2023

Russ Allbery: Review: Going Postal

Review: Going Postal, by Terry Pratchett
Series: Discworld #33
Publisher: Harper
Copyright: October 2004
Printing: November 2014
ISBN: 0-06-233497-2
Format: Mass market
Pages: 471
Going Postal is the 33rd Discworld novel. You could probably start here if you wanted to; there are relatively few references to previous books, and the primary connection (to Feet of Clay) is fully re-explained. I suspect that's why Going Postal garnered another round of award nominations. There are arguable spoilers for Feet of Clay, however. Moist von Lipwig is a con artist. Under a wide variety of names, he's swindled and forged his way around the Disc, always confident that he can run away from or talk his way out of any trouble. As Going Postal begins, however, it appears his luck has run out. He's about to be hanged. Much to his surprise, he wakes up after his carefully performed hanging in Lord Vetinari's office, where he's offered a choice. He can either take over the Ankh-Morpork post office, or he can die. Moist, of course, immediately agrees to run the post office, and then leaves town at the earliest opportunity, only to be carried back into Vetinari's office by a relentlessly persistent golem named Mr. Pump. He apparently has a parole officer. The clacks, Discworld's telegraph system first seen in The Fifth Elephant, has taken over most communications. The city is now dotted with towers, and the Grand Trunk can take them at unprecedented speed to even far-distant cities like Genua. The post office, meanwhile, is essentially defunct, as Moist quickly discovers. There are two remaining employees, the highly eccentric Junior Postman Groat who is still Junior because no postmaster has lasted long enough to promote him, and the disturbingly intense Apprentice Postman Stanley, who collects pins. Other than them, the contents of the massive post office headquarters are a disturbing mail sorting machine designed by Bloody Stupid Johnson that is not picky about which dimension or timeline the sorted mail comes from, and undelivered mail. A lot of undelivered mail. Enough undelivered mail that there may be magical consequences. All Moist has to do is get the postal system running again. Somehow. And not die in mysterious accidents like the previous five postmasters. Going Postal is a con artist story, but it's also a startup and capitalism story. Vetinari is, as always, solving a specific problem in his inimitable indirect way. The clacks were created by engineers obsessed with machinery and encodings and maintenance, but it's been acquired by... well, let's say private equity, because that's who they are, although Discworld doesn't have that term. They immediately did what private equity always did: cut out everything that didn't extract profit, without regard for either the service or the employees. Since the clacks are an effective monopoly and the new owners are ruthless about eliminating any possible competition, there isn't much to stop them. Vetinari's chosen tool is Moist. There are some parts of this setup that I love and one part that I'm grumbly about. A lot of the fun of this book is seeing Moist pulled into the mission of resurrecting the post office despite himself. He starts out trying to wriggle out of his assigned task, but, after a few early successes and a supernatural encounter with the mail, he can't help but start to care. Reformed con men often make good protagonists because one can enjoy the charisma without disliking the ethics. Pratchett adds the delightfully sharp-witted and cynical Adora Belle Dearheart as a partial reader stand-in, which makes the process of Moist becoming worthy of his protagonist role even more fun. 
I think that a properly functioning postal service is one of the truly monumental achievements of human society and doesn't get nearly enough celebration (or support, or pay, or good working conditions). Give me a story about reviving a postal service by someone who appreciates the tradition and social role as much as Pratchett clearly does and I'm there. The only frustration is that Going Postal is focused more on an immediate plot, so we don't get to see the larger infrastructure recovery that is clearly needed. (Maybe in later books?) That leads to my grumble, though. Going Postal and specifically the takeover of the clacks is obviously inspired by corporate structures in the later Industrial Revolution, but this book was written in 2004, so it's also a book about private equity and startups. When Vetinari puts a con man in charge of the post office, he runs it like a startup: do lots of splashy things to draw attention, promise big and then promise even bigger, stumble across a revenue source that may or may not be sustainable, hire like mad, and hope it all works out. This makes for a great story in the same way that watching trapeze artists or tightrope walkers is entertaining. You know it's going to work because that's the sort of book you're reading, so you can enjoy the audacity and wonder how Moist will manage to stay ahead of his promises. But it is still a con game applied to a public service, and the part of me that loves the concept of the postal service couldn't stop feeling like this is part of the problem. The dilemma that Vetinari is solving is a bit too realistic, down to the requirement that the post office be self-funding and not depend on city funds and, well, this is repugnant to me. Public services aren't businesses. Societies spend money to build things that they need to maintain society, and postal service is just as much one of those things as roads are. The ability of anyone to send a letter to anyone else, no matter how rural the address is, provides infrastructure on which a lot of important societal structure is built. Pratchett made me care a great deal about Ankh-Morpork's post office (not hard to do), and now I want to see it rebuilt properly, on firm foundations, without splashy promises and without a requirement that it pay for itself. Which I realize is not the point of Discworld at all, but the concept of running a postal service like a startup hits maybe a bit too close to home. Apart from that grumble, this is a great book if you're in the mood for a reformed con man story. I thought the gold suit was a bit over the top, but I otherwise thought Moist's slow conversion to truly caring about his job was deeply satisfying. The descriptions of the clacks are full of askew Discworld parodies of computer networking and encoding that I enjoyed more than I thought I would. This is also the book that introduced the now-famous (among Pratchett fans at least) GNU instruction for the clacks, and I think that scene is the most emotionally moving bit of Pratchett outside of Night Watch. Going Postal is one of the better books in the Discworld series to this point (and I'm sadly getting near the end). If you have less strongly held opinions about management and funding models for public services, or at least are better at putting them aside when reading fantasy novels, you're likely to like it even more than I did. Recommended. Followed by Thud!. The thematic sequel is Making Money. Rating: 8 out of 10

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested to install and use Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is the DEMO version of Home Assistant! If you want to have the full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on an HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card for storage gets old very fast. Get a really small x86 PC with at least 4GB RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation that was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd.
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress indication, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had Home Assistant Mobile installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have. What is a must-have, and what you can (really) only get with the Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users, each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs, with what settings and configs, and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and also provides a web based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus for HA developers this year - voice enablers. However, that is only a beginning.
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can, and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian community closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB, created a user and a database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognize the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again, I needed to make something new, not to feel like I wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying them out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet, either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description, amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing category and budget and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it is running in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction of amount 5 from the (default) cash account with the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
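For reference, the manual backup that the Firefly III documentation recommends boils down to a plain database dump; a minimal sketch, assuming the database and user are both called firefly (substitute whatever names you configured in the MariaDB addon):
mysqldump -u firefly -p firefly > firefly-backup.sql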

17 October 2023

Russ Allbery: Review: A Hat Full of Sky

Review: A Hat Full of Sky, by Terry Pratchett
Series: Discworld #32
Publisher: HarperTrophy
Copyright: 2004
Printing: 2005
ISBN: 0-06-058662-1
Format: Mass market
Pages: 407
A Hat Full of Sky is the 32nd Discworld novel and the second Tiffany Aching young adult novel. You should not start here, but you could start with The Wee Free Men. As with that book, some parts of the story carry more weight if you are already familiar with Granny Weatherwax. Tiffany is a witch, but she needs to be trained. This is normally done by apprenticeship, and in Tiffany's case it seemed wise to give her exposure to more types of witching. Thus, Tiffany, complete with new boots and a going-away present from the still-somewhat-annoying Roland, is off on an apprenticeship to the sensible Miss Level. (The new boots feel wrong and get swapped out for her concealed old boots at the first opportunity.) Unbeknownst to Tiffany, her precocious experiments with leaving her body as a convenient substitute for a mirror have attracted something very bad, something none of the witches are expecting. The Nac Mac Feegle know a hiver as soon as they feel it, but they have a new kelda now, and she's not sure she wants them racing off after their old kelda. Terry Pratchett is very good at a lot of things, but I don't think villains are one of his strengths. He manages an occasional memorable one (the Auditors, for example, at least before the whole chocolate thing), but I find most of them a bit boring. The hiver is one of the boring ones. It serves mostly as a concretized metaphor about the temptations of magical power, but those temptations felt so unlike the tendencies of Tiffany's personality that I didn't think the metaphor worked in the story. The interesting heart of this book to me is the conflict between Tiffany's impatience with nonsense and Miss Level's arguably excessive willingness to help everyone regardless of how demanding they get. There's something deeper in here about female socialization and how that interacts with Pratchett's conception of witches that got me thinking, although I don't think Pratchett landed the point with full force. Miss Level is clearly a good witch to her village and seems comfortable with how she lives her life, so perhaps they're not taking advantage of her, but she thoroughly slots herself into the helper role. If Tiffany attempted the same role, people would be taking advantage of her, because the role doesn't fit her. And yet, there's a lesson here she needs to learn about seeing other people as people, even if it wouldn't be healthy for her to move all the way to Miss Level's mindset. Tiffany is a precocious kid who is used to being underestimated, and who has reacted by becoming independent and somewhat judgmental. She's also had a taste of real magical power, which creates a risk of her getting too far into her own head. Miss Level is a fount of empathy and understanding for the normal people around her, which Tiffany resists and needed to learn. I think Granny Weatherwax is too much like Tiffany to teach her that. She also has no patience for fools, but she's older and wiser and knows Tiffany needs a push in that direction. Miss Level isn't a destination, but more of a counterbalance. That emotional journey, a conclusion that again focuses on the role of witches in questions of life and death, and Tiffany's fascinatingly spiky mutual respect with Granny Weatherwax were the best parts of this book for me. The middle section with the hiver was rather tedious and forgettable, and the Nac Mac Feegle were entertaining but not more than that. 
It felt like the story went in a few different directions and only some of them worked, in part because the villain intended to tie those pieces together was more of a force of nature than a piece of Tiffany's emotional puzzle. If the hiver had resonated with the darker parts of Tiffany's natural personality, the plot would have worked better. Pratchett was gesturing in that direction, but he never convinced me it was consistent with what we'd already seen of her. Like a lot of the Discworld novels, the good moments in A Hat Full of Sky are astonishing, but the plot is somewhat forgettable. It's still solidly entertaining, though, and if you enjoyed The Wee Free Men, I think this is slightly better. Followed by Going Postal in publication order. The next Tiffany Aching novel is Wintersmith. Rating: 8 out of 10

15 October 2023

Aigars Mahinovs: Figuring out finances part 2

A week ago I started to migrate my financial planning from a closed source system to a new system based on an open source, self-hosted solution. The main candidate is Firefly III - a relatively simple financial planner with a rather rich feature set and a solid user base and developer support. Starting it up with a Docker-Compose file was quite easy, following the official documentation. The same Compose file also managed the MySQL database, the importer app and a cron container for regular imports. The separate importer app allows both imports from CSV files and also from external bank account connection services. For whatever weird reason both of those services support exporting data from my regular bank accounts, but not from my credit cards for exactly the same bank. So for those cards I would need to periodically download CSV transaction exports and feed them into the importer. The combination of the cron container and the importer app allows for both of these functions. To do this you first do the import via the Web UI and configure all the mappings - configure file and date formats, map account names in the bank output to account names that you added in Firefly, configure what other columns mean and what is the mapping of other values, like expense and income accounts. At the end of the process you can download a JSON file representing all the settings of the import you just did. Putting such a JSON config file in the input folder of the cron container and telling it to do an import (weekly, for example) would do such an import periodically. Putting a credit card CSV file along with a config file for credit card import would also auto-import that. So far so good. However, when I tried importing exports from MoneyWiz or even when I tried re-creating account history directly from the bank data, I hit a very annoying problem. Transfers. The thing is, detecting transfers between the real (asset) accounts that you are managing is really essential. For one, those are not real expenses and incomes, so failing to mark them as transfers would show weird income/expense numbers. But if you do detect them as transfers and correctly map the destination account, but fail to match the transactions with one another, you get a double transaction. This becomes really hard when transactions happen on different dates and have different descriptions. So you get both a +1000 transaction "Credit card reset" on 14.09.2023 in the credit card account and a -1000$ transaction "Repayment of own credit card" on 24.09.2023 in the main account of the bank. Matching them to recognise that it is just one transfer and not two is ... non-trivial. The best solution I could come up with so far is to always map the "Opposing account name" for such transfers to a virtual "Transfers" asset account. That has the benefit of actually being able to correctly represent transfers that take several days to move between accounts and showing you how much money is still in transit at any point in time. So after I figured this out, the account balances finally started matching up with reality. Setting up spending categories required another change - the author of Firefly III does not like complexity, so it does not support nesting categories. Categories can only be in a single, flat list. It is suggested that if you do need to track multiple categories and also their combination, then category groups might help you. I will survive for now by simplifying the categories that I do actually use. Might actually make the reports more usable.
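Coming back to the transfer matching above for a moment: the naive pairing logic such an importer would need looks roughly like this (pure Python sketch, made-up data); the differing dates and descriptions are exactly why it stays fragile:

from datetime import date

transactions = [
    {"account": "credit-card", "date": date(2023, 9, 14), "amount": 1000.0,
     "description": "Credit card reset"},
    {"account": "main", "date": date(2023, 9, 24), "amount": -1000.0,
     "description": "Repayment of own credit card"},
]

def match_transfers(txs, max_days=14):
    # Pair an outgoing amount with an equal incoming amount in another
    # account within a date window; the descriptions are of no help here.
    pairs, used = [], set()
    for i, out in enumerate(txs):
        if out["amount"] >= 0 or i in used:
            continue
        for j, inc in enumerate(txs):
            if j in used or i == j or inc["amount"] != -out["amount"]:
                continue
            if inc["account"] != out["account"] and abs((inc["date"] - out["date"]).days) <= max_days:
                pairs.append((out, inc))
                used.update({i, j})
                break
    return pairs

print(match_transfers(transactions))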
A new feature for me will be the ability to use automation rules to assign transactions to categories based on their contents, like expense accounts or keywords in descriptions. Setting up regular bills (with matching rules to assign incoming transactions to those bills) is another feature that is very important to me. The feature itself works just fine in Firefly III, but it has two restrictions that the author of the software does not actually want to be changed. For one, it can not be used to track predictable incomes (like salary), presumably because it is only there for bills (subscriptions in the v3 UI). And for another, there is nothing in the base software that actually uses the data from bills to look forward. The author does not like to try to predict the future. Which for me is basically one of the two reasons to use this kind of software at all. I want to know - given all manually entered regular and 100% predictable expenses and incomes, what will be the state of my accounts at a particular date? It seems that to get at that I will need to write my own oracle script using the rich Firefly III API. The last question I had for this week was - how will I enter a new cash purchase of potatoes at a farmers market into the system that sits on a private server in my home network? Turns out there are actually multiple Android apps for Firefly III that can easily be used for this, using robust OAuth or shared token authentication. Except the web UI needs to be exposed online for this to work (and ideally protected by HTTPS). Other users have also created Telegram bots that allow using chat messages to create transaction entries. This is a bit harder to use and narrower in scope, but should be easier to set up. Will have to try both approaches. But before I really get going on this, I need to fix another thing that I have been postponing for a while: I will need to migrate my Home Assistant installation from a Docker container installation to a Home Assistant OS installation so that I can install Addons, including Firefly III, MySQL and Portainer, to have a bit more organised and less hand-knitted home server setup. Let's see how that goes next week :D
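Under the hood, both the phone apps and the Telegram bots mentioned above end up doing the same thing: a POST against the Firefly III transactions endpoint. A minimal sketch of such a call, assuming the v1 API, a personal access token in the environment and an asset account literally named "Cash" (the expense account name here is made up; Firefly creates expense accounts by name if they are missing):

import os
from datetime import date

import requests

def add_cash_expense(description: str, amount: float) -> None:
    payload = {
        "transactions": [{
            "type": "withdrawal",
            "date": date.today().isoformat(),
            "amount": str(amount),
            "description": description,
            "source_name": "Cash",                # asset account the money leaves
            "destination_name": "Misc expenses",  # hypothetical expense account
        }]
    }
    resp = requests.post(
        f"{os.environ['FIREFLY_URL'].rstrip('/')}/api/v1/transactions",
        json=payload,
        headers={
            "Authorization": f"Bearer {os.environ['FIREFLY_TOKEN']}",
            "Accept": "application/json",
        },
    )
    resp.raise_for_status()

add_cash_expense("Coffee", 5)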

14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi Kochi is a city in the state of Kerala, India. This year's DebConf was to be held in Kochi from 3rd September to 17th of September, which I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 will be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights for Kochi from Delhi and reached Kochi on 28th August. We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don't have a Google account and I don't want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyways, they only took a photo of my passport and let me check in without a Google account. We stayed in a ten room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious, clean and the beds were also comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusively for females. I noticed that Zostel was not added to OpenStreetMap and so, I added it :) . The hostel had a small canteen for tea and snacks, and a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home but all of them were closed due to the Onam festival. After returning to the Zostel, I overslept till 9 PM and in the meanwhile, Suresh planned with Saidut and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I would be disappointed by the lack of vegetarian options as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn't get any for more than 5 minutes. Since that restaurant was not added to OpenStreetMap, I didn't even know how far that was and which direction to go to. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he could drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratitude for the help. He refused to take the tip, but I insisted and he accepted. I entered the restaurant and it was coming to a close, so many items were not available. I ordered some Kadhai Paneer (the only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat Sadya thali at that restaurant, but my plans got changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations Next day, on 29th of August 2023, we had a plan to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested visiting Munnar as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar on my previous trips, so we chose Wayanad. We had a train late at night from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station to Wayanad. So, we checked out in the morning as we had plans to roam around in Kochi before taking the train. Zostel was celebrating Onam on that day. To opt in, we had to pay 400 rupees, which included a Sadya Thali and a mundu. Me and Suresh paid the amount and opted in for the celebrations. The Sadya thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickles and chutneys, Pal Ada Payasam and Coconut Jaggery Payasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a large variety of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I have had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms "Open Source" and "Free Software". I also told them about the StreetComplete app, a beginner friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel and me and Suresh left for Ernakulam Junction to catch our train, Maveli Express (16604).

Day 3: Going to Wayanad Maveli Express was scheduled to reach Kozhikode at 03:25 (morning). I had set alarms from 03:00 to 03:30, with a gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw the train reaching Kozhikode station and woke up Suresh for deboarding. But then I noticed that the train was actually leaving the station, not arriving! This meant we had missed our stop. Now we looked at the next stops and whether we could deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi and we checked online that it didn't have a retiring room. So, we skipped this stop. We got off at the next stop, named Vadakara, and found out no retiring room was available there. So, we asked for information regarding buses for Wayanad and were told that there is a bus to Wayanad around 07:00 hours from the bus station, which was a few kilometres from the railway station. We took a bus for Kalpetta (in Wayanad) at around 07:00. The destinations of the buses were written in Malayalam, which we could not read. Once again, the locals helped us get on to the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike Kochi. Despite language issues, I had no problem there in navigation, thanks to the locals. I mostly spent time sleeping during the bus journey. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village. It was 16 km from Kalpetta. On the way, we were treated to beautiful views of nature, which was present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated to beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. This place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. Locals were very nice and helped me, even if they didn't know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I had already bought in Kochi. Then, I was looking for a post office to send the spices home. The people at the spice shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi was open, which was 5 km from there. I went to Meppadi and saw that the post office closes at 15:00, but I reached five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office and the people there gave me cardboard and tape, and helped pack my stuff for the post. By the time I went to the post office again, it was 15:30. But they accepted my parcel for post.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked to the place from the hostel on a scooty. Entry ticket was worth Rs 40. There were good views inside and nothing much to see except the waterfalls.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day and so we shifted there. Again, as with their Ernakulam branch, they asked me to fill a form which required signing in using Google, but when I said I don't have a Google account they checked me in without that. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near the Zostel Wayanad. So, I was planning to trek to the heart shaped lake. It was suggested by Praveen, and looking online, this trek seemed worth doing. There was an issue, however. The charge for the trek was Rs 1770 for up to five people. So, if I went alone I would have to spend Rs 1770 for the trek. If I went with another person, we would split Rs 1770 into two, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra peak the next day, and they told me about a group of four people planning to go to Chembra peak the next day. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four with which I planned to trek met me around 06:00 in the Zostel. We went to the ticket counter around 06:30. We had breakfast at shops selling Maggi noodles and bread omelette near the ticket counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found a shop selling bamboo rice, which I bought; I will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to the Zostel in the afternoon. I had muscle pain after the trek and it has still not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi At midnight on the 2nd of September, we reached Kozhikode bus stand. Then we roamed around for something to eat. I didn't find anything vegetarian to eat. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station and took the next train to Kochi at 03:30 and reached Ernakulam Junction at 07:30 (half an hour before the train's scheduled time!). From there, we went to Zostel Fort Kochi, stayed one night there and checked out the next morning.

Day 7: Roaming around in Fort Kochi On 3rd of September, we roamed around in Fort Kochi. We visited the usual places - St Francis Church, Dutch Palace, Jew Town, Pardesi Synagogue. I also visited some homestays and the owners were very happy to show their place even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

9 October 2023

Aigars Mahinovs: Figuring out finances part 1

I have been managing my finances and getting an overview of where I am financially and where I am going month-to-month for a few years already. That means that I already have a method of doing my finances and a method of thinking about them. So far this has been supported by a commercial tool called MoneyWiz. I was happy to pay them for the ability to have a solid product and be able to easily access my finances from my phone (to enter cash transactions directly in the field) and sync data across multiple devices with their cloud offering. However, MoneyWiz development stalled. So much so that they stopped updating their Android app and cancelled web version plans to just focus on the iPhone and MacOS clients. And even there the development did not seem very successful over the last year. So I am cancelling their subscription, extracting all my data (as CSV) and looking for alternatives. So let's start with the basics - what do I want from finance software?
  • Open source and self-hosted
  • Accounts to represent real accounts in my bank, credit cards, cash account
  • Transactions to represent money moving in/out/between accounts
  • Tree of categories for transactions to categorise spending
  • Category reports on arbitrary time periods
  • Ability to see the expected balance on each account at any historic point in time from given transactions
  • Ability to do a corrective transaction - hmm, I don't know what happened, but right now I have X in cash
  • Adding past transactions before a corrective transaction should not change balance after correction
  • Ability to automatically import transactions from my bank
  • Ability to plan future, recurring transactions
  • Incoming transactions from bank import should be auto-matched with planned recurring transactions
  • Ability to see missed planned recurring transactions
  • Ability to see value deviations in planned recurring transactions
  • Ability to manually match a transaction with a planned recurring transaction for that month or skip a month
  • Ability to see a projected balance on my accounts at any future date given planned transactions
  • Credit card fully refilling its current (negative) balance to zero (from a given account) should be a plannable transaction
  • Accessible from multiple devices, like Linux, Android, iPhone, MacOS. Most likely that means web-based
  • Archiving accounts without losing their past transactions with active accounts
  • API to access data, so that I could wire that up to a Home Assistant dashboard
I don't care about currencies (thanks Euro!), localization, taxes and double-entry bookkeeping. I don't have much use for tags, budgets or savings targets. It could be useful to be able to track stock investment values (but I do have a pretty good overview from the bank that does that anyway). And tracking loans (incoming and outgoing) could be a worthwhile thing as well, but that can also be simulated with separate accounts for each loan. Tracking refundable expenses (like medical expenses that insurances should refund later on) would also be nice. Could also be simulated by having a separate expense category (Medical - refundable) and putting the refund as a negative expense into that category. One weird feature that I really like is having transfer transactions that have arbitrary incoming and outgoing dates. For some transactions it is a simple transfer delay - money leaves account A on Friday and only appears in the destination account B on Monday morning. However, this can also happen the other way around - a few credit cards I have seen work in a way that on the 14th of each month the balance of the credit card gets reset to zero and then a week later the negative balance that was on that credit card gets taken from the base account that the card is bound to. So the money leaves the base account a week later than it arrives in the credit card account. Financial systems really are magic sometimes :D Ideally that should be represented by a single transaction with two dates - one date when money leaves one account and another date when it arrives in a different account, so that balances on both accounts in the days between would be correct (as in matching what the bank says they were on those dates). And it should automatch such a transaction from bank data imports. And predict its value from the negative balance of the credit card. And a pony! But in the worst case this could be simulated using a separate "In transfer" account. When I am thinking about my finances, the key metric for me is - how much will I spend this month - both in fixed spending (rent, Internet and phone bills, health insurance, subscriptions, ...) and in variable spending (groceries, eating out, clothing, ...). In theory that should be trivial, but in practice a simple calendar month view will fail because a bunch of fixed expenses occur at the month end/start boundary and can happen either at the end of the previous month or at the start of the next month, depending on how the weekends happen to fall and how the systems of various providers decided to do direct debits this month. So the 1st of the month is not a good snapshotting time. Any date between the 6th (last possible day for monthly bills) and the 20th (earliest possible salary day) would be a better cut off point. Looking at the balance of my main account at that time point lets me know what was the difference between income and expenses for the past month. And that is the number I want the financial app to be able to predict as soon as possible. An additional complication is the way credit cards work here. The expenses booked to a credit card after the 14th of September will only appear in the main account on the 20th of October and thus will actually only count towards the expenses of the month of October.
That makes it very confusing to see a larger overspend in the main account on the 6th of October and then use the categories to try to figure out where the money went, because all the spending that happened on the credit card after the 14th of September should not really be looked at in that context, but spending on the main account or cash account should be counted all the way to the 6th of October. :D I am not optimistic that any finance software would be able to deal with this mess correctly. So far I am tending towards Firefly III, which has most of what I need and looks great and well maintained, but has the tiny drawback of being written in PHP, so that basically excludes me from the ability to contribute anything beyond ideas and feedback. :D I already did a quick test of Firefly via local Docker containers, so the next step would be to try to set it up as a production instance (using the same always-on mini PC that is running my Home Assistant instance), get the database up and working, get Firefly III itself up and running, get the account importer working for the bank connection, import historical transactions from there, set up my categories and recurring transactions, import past data dumped out of MoneyWiz and see if this works well and gives me the insights I need. If you know a better solution, please let me know on any social or email channel!
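As a side note, the credit card cut-off logic described above is simple enough to sketch in a few lines of Python; the dates, amounts and the 14th/20th rules here are just the assumptions from this post, not something any finance software provides out of the box:

from datetime import date

def effective_date(tx_date: date, account: str) -> date:
    # Card expenses booked after the 14th only hit the main account on the
    # 20th of the following month.
    if account != "credit-card" or tx_date.day <= 14:
        return tx_date
    year, month = tx_date.year + (tx_date.month == 12), tx_date.month % 12 + 1
    return date(year, month, 20)

def projected_spend(transactions, cutoff: date) -> float:
    # Sum all expenses that count towards the period ending at the cut-off date.
    return sum(amount for tx_date, account, amount in transactions
               if effective_date(tx_date, account) <= cutoff)

planned = [
    (date(2023, 9, 10), "main", 120.0),        # counts towards 6 October
    (date(2023, 9, 16), "credit-card", 80.0),  # only counts from 20 October
    (date(2023, 10, 2), "cash", 15.0),         # counts towards 6 October
]
print(projected_spend(planned, date(2023, 10, 6)))  # -> 135.0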

18 September 2023

Bits from Debian: DebConf23 closes in Kochi and DebConf24 announced

DebConf23 group photo - click to enlarge On Sunday 17 September 2023, the annual Debian Developers and Contributors Conference came to a close. Over 474 attendees representing 35 countries from around the world came together for a combined 89 events made up of Talks, Discussions, Birds of a Feather (BoF) gatherings, workshops, and activities in support of furthering our distribution, learning from our mentors and peers, building our community, and having a bit of fun. The conference was preceded by the annual DebCamp hacking session held September 3rd through September 9th where Debian Developers and Contributors convened to focus on their individual Debian related projects or work in team sprints geared toward in-person collaboration in developing Debian. In particular this year Sprints took place to advance development in Mobian/Debian, Reproducible Builds, and Python in Debian. This year also featured a BootCamp that was held for newcomers staged by a team of dedicated mentors who shared hands-on experience in Debian and offered a deeper understanding of how to work in and contribute to the community. The actual Debian Developers Conference started on Sunday 10 September 2023. In addition to the traditional 'Bits from the DPL' talk, the continuous key-signing party, lightning talks and the announcement of next year's DebConf24, there were several update sessions shared by internal projects and teams. Many of the hosted discussion sessions were presented by our technical teams who highlighted the work and focus of the Long Term Support (LTS), Android tools, Debian Derivatives, Debian Installer, Debian Image, and the Debian Science teams. The Python, Perl, and Ruby programming language teams also shared updates on their work and efforts. Two of the larger local Debian communities, Debian Brasil and Debian India, shared how their respective collaborations in Debian moved the project forward and how they attracted new members and opportunities both in Debian, F/OSS, and the sciences with their HowTos of demonstrated community engagement. The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the conference. Several activities that were unable to be held in past years due to the Global COVID-19 Pandemic were celebrated as they returned to the conference's schedule: a job fair, the open-mic and poetry night, the traditional Cheese and Wine party, the group photos and the Day Trips. For those who were not able to attend, most of the talks and sessions were videoed for live room streams with the recorded videos to be made available later through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC messaging apps or online collaborative text documents which allowed remote attendees to 'be in the room' to ask questions or share comments with the speaker or assembled audience. DebConf23 saw over 4.3 TiB of data streamed, 55 hours of scheduled talks, 23 network access points, 11 network switches, 75 kg of equipment imported, 400 meters of gaffer tape used, 1,463 viewed streaming hours, 461 T-shirts, 35 country Geoip viewers, 5 day trips, and an average of 169 meals planned per day. All of these events, activities, conversations, and streams coupled with our love, interest, and participation in Debian and F/OSS certainly made this conference an overall success both here in Kochi, India and online around the world.
The DebConf23 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf24 will be held in Haifa, Israel. As tradition follows, before the next DebConf the local organizers in Israel will start the conference activities with DebCamp, with particular focus on individual and team work towards improving the distribution. DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf23 website for more details on this. Debian thanks the commitment of numerous sponsors to support DebConf23, particularly our Platinum Sponsors: Infomaniak, Proxmox, and Siemens. We also wish to thank our Video and Infrastructure teams, the DebConf23 and DebConf committees, our host nation of India, and each and every person who helped contribute to this event and to Debian overall. Thank you all for your work in helping Debian continue to be "The Universal Operating System". See you next year! About Debian The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system. About DebConf DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/. About Infomaniak Infomaniak is a key player in the European cloud market and the leading developer of Web technologies in Switzerland. It aims to be an independent European alternative to the web giants and is committed to an ethical and sustainable Web that respects privacy and creates local jobs. Infomaniak develops cloud solutions (IaaS, PaaS, VPS), productivity tools for online collaboration and video and radio streaming services. About Proxmox Proxmox develops powerful, yet easy-to-use open-source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf23. About Siemens Siemens is a technology company focused on industry, infrastructure and transport. From resource-efficient factories, resilient supply chains, smarter buildings and grids, to cleaner and more comfortable transportation, and advanced healthcare, the company creates technology with purpose, adding real value for customers. By combining the real and the digital worlds, Siemens empowers its customers to transform their industries and markets, helping them to enhance the everyday of billions of people. Contact Information For further information, please visit the DebConf23 web page at https://debconf23.debconf.org/ or send mail to press@debian.org.

16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description When a .gitlab-ci.yml file includes files from a different repository, its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script on the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts One way to dodge the issue is to generate the non-YAML files from scripts included on the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script on a YAML file, and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented on the previous section and we want to call it from the same PATH of the main project (on the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed on the same place, i.e. when using a ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet by this:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented) and to make changes we have to decode and re-code the file manually, making it harder to maintain. With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the generated file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'

Download the files from the common repository As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets on a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory, and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "$CI_TMPL_ARCHIVE_URL?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "$CI_TMPL_ASSETS_DIR" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project; in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, which is needed to reference the project by name; if we don't like that approach we can replace the URL-encoded path by the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL to use the gitlab API to download files from a repository, we will add the arguments path and sha to select which sub path to download and from which commit, branch or tag (we will explain later why we use the CI_TMPL_REF, for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
And uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: token that includes the read_api scope for the common project; we need it because the tokens created by the CI/CD pipelines of other projects can't be used to access the API of the common one. We define the variable on the GitLab CI/CD variables section to be able to change it if needed (i.e. if it expires)
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need that to make sure we are using the right version of the files, i.e. when testing we will use a branch and on production pipelines we can use fixed tags to make sure that the assets don't change between executions unless we change the reference). We will set the value on the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent.
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following gitlab ci configuration:
gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places). The reference we use is quite important for the reproducibility of the jobs; if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as reference on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo, the way everything is going to work is almost the same as having the pipelines directly in our repo). But while developing pipelines, using branches as references is a really useful option; it allows us to re-run the jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files; if we change a job or a pipeline on the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline using a new action against the repository or executing the pipeline manually.

Conclusion For now I'm using the second solution, and as it is working well my guess is that I'll keep using that approach unless GitLab itself provides a better or simpler way of doing the same thing.

27 August 2023

Gunnar Wolf: Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of unofficial, experimental Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael's work, I managed to eventually do so. By December, I started pushing some updates. Not only that: I didn't think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but by early 2019, I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net. In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish). Anyway, this year I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon. We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them. So, this is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself): we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel on OFTC IRC. Don't be afraid, and come ask. I hope giving this project in adoption will breathe new life into it!

25 August 2023

Ian Jackson: I cycled to all the villages in alphabetical order

This last weekend I completed a bike rides project I started during the first Covid lockdown in 2020: I've cycled to every settlement (and radio observatory) within 20km of my house, in alphabetical order. Stir crazy In early 2020, during the first lockdown, I was going a bit stir crazy. Clare said "you're going very strange, you have to go out and get some exercise". After a bit of discussion, we came up with this plan: I'd visit all the local villages, in alphabetical order. Choosing the radius I decided that I would pick a round number of kilometers, as the crow flies, from my house. 20km seemed about right. 25km would have included Ely, which would have been nice, but it would have added a great many places, all of them quite distant. Software I wrote a short Rust program to process OSM data into a list of places to visit, and their distances and bearings. You can download a tarball of the alphabetical villages scanner. (I haven't published the git history because it has my house's GPS coordinates in it, and because I committed the output files from which that location can be derived.) The Rides I set off on my first ride, to Aldreth, on Sunday the 31st of May 2020. The final ride collected Yelling, on Saturday the 19th of August 2023. I did quite a few rides in June and July 2020 - more than one a week. (I'd read the lockdown rules, and although some of the government messaging said you should stay near your house, that wasn't in the legislation. Of course I didn't go into any buildings or anything.) I'm not much of a morning person, so I often set off after lunch. For the longer rides I would usually pack a picnic. Almost all of the rides I did just by myself. There were a handful where I had friends along: Dry Drayton, which I collected with Clare, at night. I held my bike up so the light shone at the village sign, so we could take a photo of it. Madingley, Melbourn and Meldreth, which was quite an expedition with my friend Ben. We went out as far as Royston and nearby Barley (both outside my radius and not on my list) mostly just so that my project would have visited Hertfordshire. The Hemingfords, where I had my friend Matthew along, and we had a very nice pub lunch. Girton and Wilburton, where I visited friends. Indeed, I stopped off in Wilburton on one or two other occasions. And, of course, Yelling, for which there were four of us, again with a nice lunch (in Eltisley). I had relatively little mechanical trouble. My worst ride for this was Exning: I got three punctures that day. Luckily the last one was close to home. I often would stop to take lots of photos en-route. My mum in particular appreciated all the pretty pictures. Rules I decided on these rules: I would cycle to each destination, in order, and it would count as collected if I rode both there and back. I allowed collecting multiple villages in the same outing, provided I did them in the right order. (And obviously I was allowed to pass through places out of order, without counting them.) I tried to get a picture of the village sign, where there was one. Failing that, I got a picture of something in the village with the village's name on it. I think the only one I didn't manage this for was Westley Bottom; I had to make do with the word Westley on some railway level crossing equipment. In Barway I had to make do with a planning application, stuck to a pole. I tried not to enter and leave a village by the same road, if possible.
Edge cases I had to make some decisions: I decided that I would consider the project complete if I visited everywhere whose centre was within my radius. But the centre of a settlement is rather hard to define. I needed a hard criterion for my OpenStreetMap data mining: a place counted if there was any node, way or relation, with the relevant place tag, any part of which was within my ambit. That included some places that probably oughtn't to have counted, but, fine. I also decided that I wouldn't visit suburbs of Cambridge, separately from Cambridge itself. I don't consider them separate settlements, at least, not if they're conurbated with Cambridge. So that excluded Trumpington, for example. But I decided that Girton and Fen Ditton were (just) separable. Although the place where I consider Girton and Cambridge to nearly touch, is administratively well inside Girton, I chose to look at land use (on the ground, and in OSM data), rather than administrative boundaries. But I did visit both Histon and Impington, and each of the Shelfords and Stapleford, as separate entries in my list. Mostly because otherwise I'd have to decide whether to skip (say) Impington, or Histon. Whereas skipping suburbs of Cambridge in favour of Cambridge itself was an easy decision, and it also got rid of a bunch of what would have been quite short, boring, urban expeditions. I sorted all the Greats and Littles under G and L, rather than (say) "Shelford, Great", which seemed like it would be cheating because then I would be able to do Shelford, Great and Shelford, Little in one go. Northstowe turned from mostly a building site into something that was arguably a settlement, during my project. It wasn't included in the output of my original data mining. Of course it's conurbated with Oakington - but happily, Northstowe inserts right before Oakington in the alphabetical list, so I decided to add it, visiting both the old and new in the same day. There are a bunch of other minor edge cases. Some villages have an outlying hamlet. Mostly I included these. There are some individual farms, which I generally didn't count. Some stats I visited 150 villages plus the Lords Bridge radio observatory. The project took 3 years and 3 months to complete. There were 96 rides, totalling about 4900km. So my mean distance was around 51km. The median distance per ride was a little higher, at around 52 km, and the median duration (including stoppages) was about 2h40. The total duration, if you add them all up, including stoppages, was about 275h, giving a mean speed including photo stops, lunches and all, of 18kph. The longest ride was 89.8km, collecting Scotland Farm, Shepreth, and Six Mile Bottom, so riding across the Cam valley. The shortest ride was 7.9km, collecting Cambridge (obviously); and I think that's the only one I did on my Brompton. The rest were all on my trusty Thorn Audax. My fastest ride (ranking by distance divided by time spent in motion) was to collect Haddenham, where I covered 46.3km in 1h39, giving an average speed in motion of 28.0kph. The most I collected in one day was 5 places: West Wickham, West Wratting, Westley Bottom, Westley Waterless, and Weston Colville. That was the day of the Wests. (There's only one East: East Hatley.) Map Here is a pretty picture of all of my tracklogs:
Edited 2023-08-25 01:32 BST to correct a slip.



13 August 2023

Gunnar Wolf: Back to online teaching

Mexico's education sector had one of the longest lockdowns due to COVID: as everybody, we went virtual in March 2020, and it was only by late February 2022 that I went back to teaching presentially at the University. But for the semester starting next Tuesday, I'm going back to a full-online mode. Why? Because me and my family will be travelling to Argentina for six months, starting this October and until next March. When I went to ask for my teaching to be frozen for two semesters, the Head of Division told me he was actually looking for teachers wanting to do distance-teaching. With a student population of >380,000 students, not being able to grow the physical infrastructure, and with such a big city as Mexico City, where a person can take over 2hr to commute daily, it only makes sense to offer part of the courses online. To be honest, I'm a bit nervous about this. The past couple of days, I've been setting up the technological parts again (i.e. spinning up a Jitsi instance, remembering my usual practices and programs). But well, I know that being videoconference-bound, my teaching will lose the dynamism that comes from talking face to face with students. I think I will miss it! (But at the same time, I'm happy to try this anew: to go virtual, but where students choosing this modality do so by choice rather than because the world forced them to.)

21 July 2023

Gunnar Wolf: Road trip through mountain ridges to find the surreal

We took a couple of days off for a family vacation / road trip through the hills of Central Mexico. The overall trip does not look like anything out of the ordinary, other than the fact that Google forecasted we'd take approximately 15.5 hours driving for 852km: that is, an average of almost 55km/h. And yes, that's what we signed up for. And that's what we got. Of course, the exact routes are not exactly what Google suggested (I can say we optimized the route a bit, i.e. by avoiding the metropolitan area of Querétaro, at the extreme west, and going via San Juan del Río / Tequisquiapan / Bernal).

The first stretch of the road is just a regular, huge highway, with no particular insights. The highways leaving and entering Mexico City on the north are neither fun nor beautiful, but they are needed to get nice trips going. Mexico City sits at a point of changing climates. Of course, it is a huge city, and I cannot imagine how it would be without all of the urbanization it now sports. But anyway: on the west, south, and part of the east, it is surrounded by high mountains, with beautiful and dense forests. Mexico City is 2200m high, and most of the valley's surrounding peaks are ~3000m (and at the south-eastern tip, our two big volcanoes, Popocatépetl and Iztaccíhuatl, get past the 5700m mark). Towards the north, the landscape is flatter and much more dry. Industrial compounds give way to dry grasslands. Of course, central Mexico does not understand the true meaning of flat, and the landscape is full of eh-not-very-big mountains.

Then, as we entered Querétaro State, we started approaching Bernal. And we saw a huge rock that looks like it is not supposed to be there! It just does not fit the surroundings. Shortly after Bernal, we entered a beautiful, although most crumpled, mountain ridge: Sierra Gorda de Querétaro. Sierra Gorda encompasses most of the north of the (quite small, 11,500km² in total) state of Querétaro, plus portions of the neighboring states; other than the very abrupt and sharp orography, what strikes me most is the habitat diversity it encompasses. We started going up an absolute desert, harsh and beautiful; we didn't take pictures along the way, as the road is difficult enough that there are almost no points for stopping for refreshments or for photo opportunities. But it is quite majestic. And if you think deserts are barren, boring places... well, please do spend some time enjoying them! Anyway, at one point the road passes a height of ~3100m, and suddenly: pines! More pines! A beautiful forest! We reached our first stop at the originally mining town of Pinal de Amoles.

After spending the night there and getting a much needed rest, we started a quite steep descent towards Jalpan de Serra. While it is only ~20km away on the map, we descended from 2300 to 760 meters of altitude (and the road was over 40km long). Being much lower, the climate drastically changed from cool and humid to quite warm, and the body attitude of the kids does not lie! In the mid-18th century, Fray Junípero Serra established five missions to evangelize the population of this very harsh territory, and the frontispiece of the church and monastery in Jalpan is quite breathtaking. But we were just passing by Jalpan. A short visit to the church and to the ice-cream shop, and we were again on our way. We crossed the state border, entering San Luis Potosí, and arrived at our main destination: Xilitla, the little town in the beautiful Huasteca where the jungle meets surrealism.
Xilitla was chosen by Edward James (https://en.wikipedia.org/wiki/Edward_James), the British poet and patron of various surrealist artists. He was a British noble (an unofficial grandson of King Edward VII) and heir to a huge fortune. I'm not going to repeat his very well known biography here; suffice it to say that he fell in love with the Huasteca, bought a >30ha piece of jungle and mountain close to the town of Xilitla, and made it his house. With very ample economic resources, in the late 1940s he started his lifelong project of building a surrealist garden. And... well, that's enough blabbering from me. I'm sharing some pictures I took there. The place is plainly magic and wonderful. Edward James died in 1984, and his will decreed that after his death, the jungle should be allowed to reclaim the constructions, so many structures are somewhat crumbling, and it is expected they will break down in the following decades. But for whoever comes to Mexico: this magic place is definitely worth the heavy ride to the middle of the mountains and to the middle of the jungle. Xilitla now also hosts a very good museum with sculptures by Leonora Carrington, James' long-time friend, but I'm not going to abuse this space with even more pictures. And of course, we did more, and enjoyed more, during our three days in Xilitla.

And for our way back, I wanted to try a different route. We decided to come back to Mexico City crossing Hidalgo state instead of Querétaro. I had feared the roads would be in worse shape or would be more difficult to travel, and I was happy to be proven wrong! This was the longest driving stretch: approximately 6:30 for 250km. The roads are in quite decent shape, and while there are some stretches where we were quite lonely (probably the loneliest one was the sharp ascent from Tamazunchale to the detour before Orizatlán), the road felt safe and well kept at all times. The sights all across eastern Hidalgo are breathtaking, and all furiously green (be it with really huge fern leaves or with tall, strong pines), until Zacualtipán. And just as abruptly as when we entered Pinal de Amoles, or even more so, we crossed Orizatlán and were in a breathtaking arid, desert-like environment again. We crossed the Barranca de Metztitlán natural reserve, and arrived to spend the night at Huasca de Ocampo. There are many more things we could have done starting at Huasca, a region where old haciendas thrived, full of natural formations, and very, very interesting. But we were tired and pining to be finally back home. So we rested until mid-morning and left straight back home to Mexico City. Three hours later, we were relaxing, preparing lunch, the kids watching whatever TV-like things are called nowadays. All in all, a very beautiful vacation!

12 July 2023

Matt Brown: 2023 Mid Year Review

I'm six months into my journey of building a business, which means it's time to reflect and review the goals I set for the year.

No further investment in co2mon.nz In March I made the decision to focus on completing the market research for co2mon.nz. The results of that research led to two key conclusions:
  1. Indoor air quality/ventilation is not a problem many people are actively thinking about or looking to spend money to improve.
  2. Even when introduced to the problem and educated about the need, most people are looking for a one-off expense or solution (e.g. the physical monitor) and are much less interested in a monitoring software service.
Based on that, it was clear that this is not an opportunity that I should continue pursuing, and I've put co2mon.nz into maintenance mode. I've committed to maintaining the infrastructure to support existing customers, but I won't be investing time or energy in developing it further.

Discipline in selecting product opportunities The decision to stop investing more time into co2mon.nz was straightforward given the results of the research, but it was also painful given the time I've already sunk into it. In hindsight it's clear that my enthusiasm to solve a problem with technology I enjoyed was my driving force, rather than a deep understanding of the wants and needs of potential customers. I don't entirely regret trying my luck once - but it's not time efficient, and I know that following that pattern again is not a sustainable or viable path to building a successful business. I've decided to use the following list of questions to bring more discipline to how I evaluate product opportunities in future:
  1. Problem: Is this something that a sizeable number of people are struggling with AND are willing to spend money solving?
  2. Capability: Can I deliver a solution that solves the problem in a reliable and cost-effective way?
  3. Excitement: Am I excited and motivated to invest time in building the solution to this problem?
  4. Trust: Do I have the expertise and experience to be trusted to solve the problem by potential customers?
  5. Execution: Can I package, market and sell that solution in a profitable manner?
My plan is to answer these questions and then make an evaluation of the potential before I commit time to building any part of a product. As an example of how I think that will help, here's what I think the answers to those questions for ventilation monitoring would have been:
  1. Problem: No - as the market research eventually showed.
  2. Capability: Low - The part of the solution which customers primarily value (the hardware) is complex and outside of my core experience. The software I can easily deliver is not where the value is seen.
  3. Excitement: Yes - this was the primary driver of starting the project.
  4. Trust: Low - I'm trusted to build software, but cannot claim any specific expertise in air quality and ventilation.
  5. Execution: Low confidence - These skills are not ones I've exercised a lot in my career to date.
What these answers point to is that identifying the problem alone is not enough. I don't expect every question to have a perfect answer, but I want to hold myself to only pursuing opportunities where there's only one major area of doubt. In this case, even had the market research demonstrated a problem that many customers would pay to solve, there were still some big answers missing to the trust, capability and execution questions. Overall, my conclusion is that co2mon.nz was not the ideal business to start my journey with, given the number of open questions in the plan. I like to think that conclusion would also have been clear to me six months ago had I taken the time to go through this process then!

Prioritising areas of growth Applying those questions to my other product ideas results in a lot of "I don't know yet" answers to the problem and capability questions, further reinforcing the lesson that I need to spend more time understanding whether there is a problem with a viable business model attached in those areas before progressing any of those ideas. Beyond that lesson, a more interesting observation comes from the last question regarding execution. My answers to the first four questions vary between ideas, but my answers to this last question are always the same - I don't have a lot of confidence in my sales and marketing skills to sell a product. That's not a surprise. My career to date has been focused on software development and leadership; I don't have a lot of experience with sales and marketing. The opportunity to grow and develop those skills is actually a large part of my motivation for choosing the path of building my own business. But seeing that this is a common factor that will need significant investment regardless of which opportunity I pursue sends me a strong signal that I should focus on growth in this area as a priority.

Following that logic through to the next step of what creating that focus would look like reveals a conflict: the nature of the mission I've set for myself draws me to products in areas that are new to me, which means there's also a need to invest in building expertise in those areas. Again not a surprise, but the time and focus required to develop that expertise competes with time spent growing my sales and marketing skills. So I have a prioritisation problem. Solving it is going to require changing the type of product I'm trying to build in the short term: I need to build a product that uses my existing expertise and strengths as much as possible, so that I can put the majority of my energy into growing the core business skills where my confidence is currently lacking. Trying to deliver meaningful improvements to a big problem in an area I don't have past experience in, while also learning how to sell and market a product, is biting off more than I can chew right away.

Changing the goal posts With those lessons in hand, I'm making three changes to my 2023 Goals:
  1. Reducing the product development goal from several ideas to two. The first was co2mon.nz. The second will be drawn from my existing expertise - not one of the previously stated ideas that require me to develop expertise in a new area.
  2. Moving the consulting and product development goals to be alternatives - I expect I can achieve at most one of them this year.
  3. Reducing the publishing target for this site from "at least once a week" to "once a month". I thought I'd have more to say this year, but the words are coming very slowly to me.
Reducing scope and ambition is humbling, but that's reality. I hope it turns out to be a case of slowing down and laying the foundations in order to then move faster. The good news is that I don't feel the need to make any changes to the vision, mission and strategy I'm following - I think they're still the right destination and overall path for me, even though the first six months have proven bumpy. I just need to be a bit more realistic about the short-term goals that will feed into them.

The next few months I'm choosing to prioritise the product development goal. I'm aiming to complete the market research/problem definition phase for a product opportunity I've identified in the SRE/DevOps space (where my previous experience is) and make a decision on whether to start development by mid-August. In making that decision I plan to gather the answers to my questions, and then diligently evaluate whether the opportunity is worth committing to or not. I will write more about this process in the coming weeks. If I decide to proceed, that gives me 2-3 months to get an MVP in the hands of customers and get concrete validation of whether the product has revenue and growth potential before the end of the year. Tight, but if things go well, and I don't take any further consulting work, there's a reasonable chance I can complete the revised goal successfully. In the event that I decide the product opportunity I'm currently researching is not the right one to commit to, I will likely revert to focusing on my consulting goal in the remaining 2-3 months of the year, rather than attempt a third product development iteration. Thanks for reading this far! As always, I'd love your thoughts and feedback.

Appendix: Revised 2023 Goals Putting all that together, the ultimate outcome of this review (including updated progress scoring) looks like:
  1. Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities. Score: 3/10 - I focused entirely on co2mon.nz during April, May and June to the detriment of my pipeline of consulting work. This score is unlikely to improve given the above plan, unless I decide not to commit to developing the idea I'm currently investigating.
  2. Grow my product development skill set by taking two ideas (co2mon.nz, an SRE/DevOps focused product) to MVP stage with customer feedback received, and generate revenue and growth potential from one of them. Score: 4/10 - I launched co2mon.nz and got feedback, but discovered it didn't solve a problem relevant to customers and therefore did not generate substantial revenue or growth potential. Idea number two is still in progress.
  3. Develop and maintain a broad professional network.
    1. To build a professional relationship with at least 30 new people this year. Score: 6/10 - This is going well. On track for a 10/10 score.
    2. To publish a piece of writing on this site once a month and for many of those to generate interesting conversations and feedback. Score: 6/10 - 4 out of 6 months have featured a post meeting this goal so far.
    3. To support the growth of my local technical community by volunteering my experience and knowledge with others. Score: 5/10 - I've given one talk and helped with SREcon23 APAC, but not done as much other work in this area as I'd like.

28 June 2023

Russ Allbery: Review: Translation State

Review: Translation State, by Ann Leckie
Publisher: Orbit
Copyright: June 2023
ISBN: 0-316-29024-6
Format: Kindle
Pages: 354
Translation State is a science fiction novel set in the same universe as the Imperial Radch series and Provenance. It is not truly a sequel to any of those books, but as with Provenance, it has significant spoilers for the conclusion of Ancillary Mercy. Provenance takes place earlier, but its plot is unrelated as far as I can recall.

Enea has spent much of hir adult life living with hir difficult and somewhat abusive grandmanan and, in recent years, running her household. Now, Grandmanan is dead, and the relatives who have been waiting to inherit Grandmanan's wealth are descending like a flock of vultures and treating hir like a servant. Enea can barely stand to be around them. It is therefore somewhat satisfying to watch their reactions when they discover that there is no estate. Grandmanan had been in debt and sold her family title to support herself for the rest of her life. Enea will receive an allowance and an arranged job that expects a minimum of effort. Everyone else gets nothing. It's still a wrenching dislocation from everything Enea has known, but at least sie can relax, travel, and not worry about money. Enea's new job for the Office of Diplomacy is to track down a fugitive who disappeared two hundred years earlier. The request came from the Radchaai Translators Office, the agency responsible for the treaty with the alien Presger, and was resurrected due to the upcoming conclave to renegotiate the treaty. No one truly expects Enea to find this person or any trace of them. It's a perfect quiet job to reward hir with travel and a stipend for putting up with Grandmanan all these years. This plan lasts until Enea's boredom and sense of duty get the better of hir.

Enea is one of three viewpoint characters. Reet lives a quiet life in which he only rarely thinks about murdering people. He has a menial job in Rurusk Station, at least until he falls in with an ethnic club that may be a cover for more political intentions. Qven... well, Qven is something else entirely.

Provenance started with some references to the Imperial Radch trilogy but then diverged into its own story. Translation State does the opposite. It starts as a cozy pseudo-detective story following Enea and a slice-of-life story following Reet, interspersed with baffling chapters from Qven, but by the end of the book the characters are hip-deep in the trilogy aftermath. It's not the direct continuation of the political question of the trilogy that I'm still partly hoping for, but it's adjacent.

As you might suspect from the title, this story is about Presger Translators. Exactly how is not entirely obvious at the start, but it doesn't take long for the reader to figure it out. Leckie fills in a few gaps in the world-building and complicates (but mostly retains) the delightfully askew perspective Presger Translators have on the world. For me, though, the best part of the book was the political maneuvering once the setup is complete and all the characters are in the same place. The ending, unfortunately, dragged a little bit; the destination of the story was obvious but delayed by characters not talking to each other. I tend to find this irritating, but I know tastes differ.

I was happily enjoying Translation State but thinking that it didn't suck me in as much as the original trilogy, and even started wondering if I'd elevated the Imperial Radch trilogy too high in my memory. Then an AI ship showed up and my brain immediately got fully invested in the story.
I'm very happy to get whatever other stories in this universe Leckie is willing to write, but I would have been even happier if a ship appeared as more than a supporting character. To the surprise of no one who reads my reviews, I clearly have strong preferences in protagonists. This wasn't one of my favorites, but it was a solidly good book, and I will continue to read everything Ann Leckie writes. If you liked Provenance, I think you'll like this one as well. We once again get a bit more information about the aliens in this universe, and this time around we get more Radchaai politics, but the overall tone is closer to Provenance. Great powers are in play, but the focus is mostly on the smaller scale. Recommended, but of course read the Imperial Radch trilogy first. Note that Translation State uses a couple of sets of neopronouns to represent different gender systems. My brain still struggles with parsing them grammatically, but this book was good practice. It was worth the effort to watch people get annoyed at the Radchaai unwillingness to acknowledge more than one gender. Content warning: Cannibalism (Presger Translators are very strange), sexual assault. Rating: 8 out of 10

16 June 2023

John Goerzen: Using git-annex for Data Archiving

In my recent post about data archiving to removable media, I laid out the difference between backing up and archiving, and also said I'd evaluate git-annex and dar. This post evaluates git-annex. The next will look at dar, and then I'll make a comparison post.

What is git-annex? git-annex is a fantastic and versatile program that does... well, it's one of those things that can do so much that it's a bit hard to describe. Its homepage says:
git-annex allows managing large files with git, without storing the file contents in git. It can sync, backup, and archive your data, offline and online. Checksums and encryption keep your data safe and secure. Bring the power and distributed nature of git to bear on your large files with git-annex.
I think the particularly interesting features of git-annex aren't actually included in that list. Among the features of git-annex that make it shine for this purpose, its location tracking is key. git-annex can know exactly which device has which file at which version at all times. Combined with its preferred content settings, this lets you very easily say things like: git-annex can be set to allow a configurable amount of free space to remain on a device, and it will fill the device with whatever copies are necessary up until it hits that limit. Very convenient! git-annex will store files in a folder structure that mirrors the origin folder structure, in plain files just as they were. This maximizes the ability for a future person to access the content, since it is all viewable without any special tool at all. Of course, for things like optical media, git-annex will essentially be creating what amounts to incrementals. To obtain a consistent copy of the original tree, you would still need to use git-annex to process (export) the archives.

git-annex challenges In my prior post, I related some challenges with git-annex. The biggest of them (quite poor performance of the directory special remote when dealing with many files) has been resolved by Joey, git-annex's author! That dramatically improves the git-annex use scenario here! The fixing commit is in the source tree but not yet in a release. git-annex no doubt may still have performance challenges with repositories in the 100,000+ file range, but in that order of magnitude it now looks usable. I'm not sure about 1,000,000-file repositories (I haven't tested); there is a page about scalability. A few other, more minor challenges remain; one of them is that file timestamps are not preserved. I worked around the timestamp issue by using the mtree-netbsd package in Debian. mtree writes out a summary of files and metadata in a tree, and can restore them. To save:

mtree -c -R nlink,uid,gid,mode -p /PATH/TO/REPO -X <(echo './.git') > /tmp/spec

And, after restoration, the timestamps can be applied with:

mtree -t -U -e < /tmp/spec

Walkthrough: initial setup To use git-annex in this way, we have to do some setup. My general approach is laid out in the walkthrough that follows. Let's get started! I've set all these shell variables appropriately for this example (one hypothetical set of values is sketched just below), and REPONAME to "testdata". We'll begin by setting up the metadata-only tracking repo.
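Since the rest of the walkthrough leans on those variables, here is one possible set of values, purely as an illustration; the paths are placeholders of my own invention rather than my actual layout, and REPONAME itself is set in the first block below.

$ METAREPO=$HOME/annex-meta           # the metadata-only tracking repo created next
$ SOURCEDIR=$HOME/archive-source      # the data we want to archive
$ DRIVE01=/media/annexdrive01         # mount point of the first removable drive
$ DRIVE02=/media/annexdrive02         # mount point of the second removable drive
$ RESTOREDIR=$HOME/annex-restore      # used much later, in the restoration walkthrough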
$ REPONAME=testdata
$ mkdir "$METAREPO"
$ cd "$METAREPO"
$ git init
$ git config annex.thin true
There is a sort of complicated topic of how git-annex stores files in a repo, which varies depending on whether the data for the file is present in a given repo, and whether the file is locked or unlocked. Basically, the options I use here cause git-annex to mostly use hard links instead of symlinks or pointer files, for maximum compatibility with non-POSIX filesystems such as NTFS and UDF, which might be used on these devices. thin is part of that. Let's continue:
$ git annex init 'local hub'
init local hub ok
(recording state in git...)
$ git annex wanted . "include=* and exclude=$REPONAME/*"
wanted . ok
(recording state in git...)
In a bit, we are going to import the source data under the directory named $REPONAME (here, testdata). The wanted command says: in this repository (represented by the bare dot), the files we want are matched by the rule that says everything except what's under $REPONAME. In other words, we don't want to make an unnecessary copy here. Because I expect to use an mtree file as documented above, and it is not under $REPONAME/, it will be included. Let's just add it and tweak some things.
$ touch mtree
$ git annex add mtree
add mtree
ok
(recording state in git...)
$ git annex sync
git-annex sync will change default behavior to operate on --content in a future version of git-annex. Recommend you explicitly use --no-content (or -g) to prepare for that change. (Or you can configure annex.synccontent)
commit
[main (root-commit) 6044742] git-annex in local hub
1 file changed, 1 insertion(+)
create mode 120000 mtree
ok
$ ls -l
total 9
lrwxrwxrwx 1 jgoerzen jgoerzen 178 Jun 15 22:31 mtree -> .git/annex/objects/pX/ZJ/...
OK! We've added a file, and it got transformed into a symlink. That's the thing I said we were going to avoid, so:
$ git annex adjust --unlock-present
adjust
Switched to branch 'adjusted/main(unlockpresent)'
ok
$ ls -l
total 1
-rw-r--r-- 2 jgoerzen jgoerzen 0 Jun 15 22:31 mtree
You'll notice it transformed into a hard link (nlinks=2) file. Great! Now let's import the source data. For that, we'll use the directory special remote.
$ git annex initremote source type=directory directory=$SOURCEDIR importtree=yes \
encryption=none
initremote source ok
(recording state in git...)
$ git annex enableremote source directory=$SOURCEDIR
enableremote source ok
(recording state in git...)
$ git config remote.source.annex-readonly true
$ git config annex.securehashesonly true
$ git config annex.genmetadata true
$ git config annex.diskreserve 100M
$ git config remote.source.annex-tracking-branch main:$REPONAME
OK, so here we created a new remote named "source". We enabled it, and set some configuration. Most notably, that last line causes files from "source" to be imported under $REPONAME/ as we wanted earlier. Now we're ready to scan the source.
$ git annex sync
At this point, you'll see git-annex computing a hash for every file in the source directory. I can verify with du that my metadata-only repo only uses 14MB of disk space, while my source is around 4GB. Now we can see what git-annex thinks about file locations:
$ git-annex whereis | less
whereis mtree (1 copy)
8aed01c5-da30-46c0-8357-1e8a94f67ed6 -- local hub [here]
ok
whereis testdata/[redacted] (0 copies)
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
failed
... many more lines ...
So remember we said we wanted mtree, but nothing under testdata, under this repo? That's exactly what we got. git-annex knows that the files under testdata can be found under the "source" special remote, but aren't in any git-annex repo -- yet. Now we'll start adding them.

Walkthrough: removable drives I've set up two 500MB filesystems to represent removable drives. We'll see how git-annex works with them.
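If you would like to follow along without dedicating real hardware, any small scratch filesystem can stand in for a drive. This is just one hypothetical way to fake a 500MB "drive" with a loopback mount (my own two, as the df output below shows, were filesystems mounted under /acrypt); repeat with a second image for drive02:

$ truncate -s 500M /tmp/drive01.img               # 500MB backing file
$ mkfs.ext4 -F /tmp/drive01.img                   # -F: it's a regular file, not a block device
$ sudo mkdir -p /media/annexdrive01
$ sudo mount -o loop /tmp/drive01.img /media/annexdrive01
$ sudo chown $USER /media/annexdrive01            # let your user write to it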
$ cd $DRIVE01
$ df -h .
Filesystem Size Used Avail Use% Mounted on
acrypt/no-backup/annexdrive01 500M 1.0M 499M 1% /acrypt/no-backup/annexdrive01
$ git clone $METAREPO
Cloning into 'testdata'...
done.
$ cd $REPONAME
$ git config annex.thin true
$ git annex init "test drive #1"
$ git annex adjust --hide-missing --unlock
adjust
Switched to branch 'adjusted/main(hidemissing-unlocked)'
ok
$ git annex sync
OK, that's the initial setup. Now let's enable the source remote and configure it the same way we did before:
$ git annex enableremote source directory=$SOURCEDIR
enableremote source ok
(recording state in git...)
$ git config remote.source.annex-readonly true
$ git config remote.source.annex-tracking-branch main:$REPONAME
$ git config annex.securehashesonly true
$ git config annex.genmetadata true
$ git config annex.diskreserve 100M
Now, we'll add the drive to a group called "driveset01" and configure what we want on it:
$ git annex group . driveset01
$ git annex wanted . '(not copies=driveset01:1)'
What this does is say: first of all, this drive is in a group named driveset01. Then, this drive wants any files for which there isn't already at least one copy in driveset01. Now let's load up some files!
$ git annex sync --content
As the messages fly by from here, you'll see it mentioning that it got mtree, and then various files from "source" -- until, that is, the filesystem had less than 100MB free, at which point it complained of no space for the rest. Exactly like we wanted! Now, we need to teach $METAREPO about $DRIVE01.
$ cd $METAREPO
$ git remote add drive01 $DRIVE01/$REPONAME
$ git annex sync drive01
git-annex sync will change default behavior to operate on --content in a future version of git-annex. Recommend you explicitly use --no-content (or -g) to prepare for that change. (Or you can configure annex.synccontent)
commit
On branch adjusted/main(unlockpresent)
nothing to commit, working tree clean
ok
merge synced/main (Merging into main...)
Updating d1d9e53..817befc
Fast-forward
(Merging into adjusted branch...)
Updating 7ccc20b..861aa60
Fast-forward
ok
pull drive01
remote: Enumerating objects: 214, done.
remote: Counting objects: 100% (214/214), done.
remote: Compressing objects: 100% (95/95), done.
remote: Total 110 (delta 6), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (110/110), 13.01 KiB | 1.44 MiB/s, done.
Resolving deltas: 100% (6/6), completed with 6 local objects.
From /acrypt/no-backup/annexdrive01/testdata
* [new branch] adjusted/main(hidemissing-unlocked) -> drive01/adjusted/main(hidemissing-unlocked)
* [new branch] adjusted/main(unlockpresent) -> drive01/adjusted/main(unlockpresent)
* [new branch] git-annex -> drive01/git-annex
* [new branch] main -> drive01/main
* [new branch] synced/main -> drive01/synced/main
ok
OK! This step is important, because drive01 and drive02 (which we'll set up shortly) won't necessarily be able to reach each other directly, due to not being plugged in simultaneously. Our $METAREPO, however, will know all about where every file is, so that the "wanted" settings can be correctly resolved. Let's see what things look like now:
$ git annex whereis | less
whereis mtree (2 copies)
8aed01c5-da30-46c0-8357-1e8a94f67ed6 -- local hub [here]
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
ok
whereis testdata/[redacted] (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
If I scroll down a bit, I'll see the files past the 400MB mark that didn't make it onto drive01. Let's add another example drive!

Walkthrough: Adding a second drive The steps for $DRIVE02 are the same as we did before, just with drive02 instead of drive01, so I'll omit listing it all a second time. Now look at this excerpt from whereis:
whereis testdata/[redacted] (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
whereis testdata/[redacted] (1 copy)
c4540343-e3b5-4148-af46-3f612adda506 -- test drive #2 [drive02]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
Look at that! Some files on drive01, some on drive02, some in neither place. Perfect!

Walkthrough: Updates So I've made some changes in the source directory: moved a file, added another, and deleted one. All of these were copied to drive01 above. How do we handle this? First, we update the metadata repo:
$ cd $METAREPO
$ git annex sync
$ git annex dropunused all
OK, this has scanned $SOURCEDIR and noted changes. Let's see what whereis says:
$ git annex whereis | less
...
whereis testdata/cp (0 copies)
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
failed
whereis testdata/file01-unchanged (1 copy)
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [drive01]
The following untrusted locations may also have copies:
9e48387e-b096-400a-8555-a3caf5b70a64 -- [source]
ok
So this looks right. The file I added was a copy of /bin/cp. I moved another file to one named file01-unchanged. Notice that it realized this was a rename and that the data still exists on drive01. Well, let's update drive01.
$ cd $DRIVE01/$REPONAME
$ git annex sync --content
Looking at the testdata/ directory now, I see that the renamed file01-unchanged is there, the deleted file is gone, but cp isn't yet here -- probably due to space issues; as it's new, it's undefined whether it or some other file would fill up the free space. Let's work through a few more commands.
$ git annex get --auto
$ git annex drop --auto
$ git annex dropunused all
And now, let's make sure metarepo is updated with its state.
$ cd $METAREPO
$ git annex sync
We could do the same for drive02. This is how we would proceed with every update.

Walkthrough: Restoration Now, we have bare files at reasonable locations in drive01 and drive02. But, to generate a consistent restore, we need to be able to actually do an export. Otherwise, we may have files with old names, duplicate files, etc. Let's assume that we lost our source and metadata repos and have to restore from scratch. We'll make a new $RESTOREDIR. We'll begin with drive01 since we used it most recently.
$ mv $METAREPO $METAREPO.disabled
$ mv $SOURCEDIR $SOURCEDIR.disabled
$ git clone $DRIVE01/$REPONAME $RESTOREDIR
$ cd $RESTOREDIR
$ git config annex.thin true
$ git annex init "restore"
$ git annex adjust --hide-missing --unlock
Now, we need to connect the drive01 and pull the files from it.
$ git remote add drive01 $DRIVE01/$REPONAME
$ git annex sync --content
Now, repeat with drive02:
$ git remote add drive02 $DRIVE02/$REPONAME
$ git annex sync --content
Now we've got all our content back! Here's what whereis looks like:
whereis testdata/file01-unchanged (3 copies)
3d663d0f-1a69-4943-8eb1-f4fe22dc4349 -- restore [here]
9e48387e-b096-400a-8555-a3caf5b70a64 -- source
b46fc85c-c68e-4093-a66e-19dc99a7d5e7 -- test drive #1 [origin]
ok
...
I was a little surprised that drive01 didn't seem to know what was on drive02. Perhaps that could have been remedied by adding more remotes there? I'm not entirely sure; I'd thought it would have been able to do that automatically.

Conclusions I think I have demonstrated two things: First, git-annex is indeed an extremely powerful tool. I have only scratched the surface here. The location tracking is a neat feature, and being able to just access the data as plain files if all else fails is nice for future users. Secondly, it is also a complex tool and difficult to get right for this purpose (I think it is much easier for some other purposes). For someone that doesn't live and breathe git-annex, it can be hard to get right. In fact, I'm not entirely sure I got it right here. Why didn't drive02 know what files were on drive01 and vice versa? I don't know, and that reflects some kind of misunderstanding on my part about how metadata is synced; perhaps more care needs to be taken in the restore, or it needs to be done in a different order, than I proposed. I initially tried to do a restore by using git annex export to a directory special remote with exporttree=yes, but I couldn't ever get it to actually do anything, and I don't know why. These two points cut against each other. On the one hand, the raw accessibility of the data to someone with no computer skills is unmatched. On the other hand, I'm not certain I have the skill to always prepare the discs properly, or to do a proper consistent restore.
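For reference, the export attempt I mention above would have looked roughly like the commands below. This is a from-memory sketch of the idea rather than a working recipe (it is, after all, the part I never got to do anything), and EXPORTDIR is just a placeholder for an empty destination directory:

$ EXPORTDIR=/tmp/annex-export                     # hypothetical destination for the consistent tree
$ mkdir -p "$EXPORTDIR"
$ git annex initremote exportdest type=directory directory=$EXPORTDIR \
    exporttree=yes encryption=none
$ git annex export main --to exportdest           # export the main branch's tree to that remote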
